We present a high-fidelity 3D generative adversarial network (GAN) inversion framework that can synthesize photo-realistic novel views while preserving specific details of the input image. High-fidelity 3D GAN inversion is inherently challenging due to the geometry-texture trade-off, where overfitting to the single-view input image often damages the estimated geometry during latent optimization. To address this challenge, we propose a novel pipeline that builds on pseudo-multi-view estimation with visibility analysis. We keep the original textures for the visible parts and utilize generative priors for the occluded parts. Extensive experiments show that our approach achieves advantageous reconstruction and novel view synthesis quality over state-of-the-art methods, even for images with out-of-distribution textures. The proposed pipeline also enables image attribute editing with the inverted latent code and 3D-aware texture modification. Our approach enables high-fidelity 3D rendering from a single image, which is promising for various applications of AI-generated 3D content.
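As an illustration of the visibility-based blending idea, here is a minimal sketch: texture observed in the input view is kept where visible and filled in from the generative prior elsewhere. The names and array shapes are assumptions for illustration, not the paper's code.

```python
import numpy as np

def blend_textures(input_rgb, generated_rgb, visibility):
    """Blend observed and generated textures for a novel view.

    input_rgb:     (H, W, 3) texture warped from the input view
    generated_rgb: (H, W, 3) texture synthesized by the generative prior
    visibility:    (H, W) soft mask in [0, 1]; 1 = visible in the input view
    """
    vis = visibility[..., None]  # broadcast the mask over RGB channels
    return vis * input_rgb + (1.0 - vis) * generated_rgb
```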
This paper presents SaleNet, an end-to-end convolutional neural network (CNN) for sustained attention level evaluation using prefrontal electroencephalography (EEG). A bias-driven pruning method is proposed and, combined with group convolution, global average pooling (GAP), near-zero pruning, weight clustering, and quantization for model compression, achieves a total compression ratio of 183.11x. The compressed SaleNet obtains a state-of-the-art subject-independent sustained attention classification accuracy of 84.2% on the recorded 6-subject EEG database. SaleNet is implemented on an Artix-7 FPGA with a competitive power consumption of 0.11 W and an energy efficiency of 8.19 GOPS/W.
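As a back-of-the-envelope view of how such compression techniques compound, a short sketch follows; the parameter count, kept fraction, and bit width below are illustrative assumptions, not the paper's numbers.

```python
def compression_ratio(params, kept_fraction, weight_bits, baseline_bits=32):
    """Rough compression estimate from pruning plus clustering/quantization.

    params:        original parameter count
    kept_fraction: fraction of weights surviving pruning
    weight_bits:   bits per stored weight after clustering/quantization
    """
    original = params * baseline_bits
    compressed = params * kept_fraction * weight_bits
    return original / compressed

# Illustrative only: keeping ~5% of weights at 4 bits gives a 160x ratio,
# the same order of magnitude as the paper's reported 183.11x.
print(compression_ratio(params=100_000, kept_fraction=0.05, weight_bits=4))
```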
Owing to the huge semantic gap between natural and formal languages, neural semantic parsing is typically bottlenecked by the complexity of dealing with both input semantics and output syntax. Recent works have proposed several forms of supplementary supervision, but none generalizes across multiple formal languages. This paper proposes a unified intermediate representation (IR) for graph query languages, named GraphQ IR. It has a natural-language-like expression that bridges the semantic gap and a formally defined syntax that maintains the graph structure. Therefore, a neural semantic parser can more precisely convert user queries into GraphQ IR, which can later be losslessly compiled into various downstream graph query languages. Extensive experiments on several benchmarks including KQA Pro, Overnight, GrailQA, and MetaQA-Cypher under standard i.i.d., out-of-distribution, and low-resource settings validate GraphQ IR's superiority over the previous state of the art, with up to an 11% accuracy improvement.
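To make the IR-as-compilation-target idea concrete, here is a toy sketch that deterministically compiles a natural-language-like intermediate form into Cypher. The IR pattern below is invented for illustration; GraphQ IR's actual grammar is defined in the paper.

```python
import re

def compile_to_cypher(ir: str) -> str:
    """Compile a toy 'which <concept> whose <relation> is <value>' IR into Cypher."""
    m = re.fullmatch(r"which (\w+) whose (\w+) is (\w+)", ir)
    if m is None:
        raise ValueError(f"unsupported IR: {ir!r}")
    label, relation, value = m.groups()
    return (f"MATCH (e:{label})-[:{relation}]->(v) "
            f"WHERE v.name = '{value}' RETURN e")

print(compile_to_cypher("which movie whose director is Nolan"))
# MATCH (e:movie)-[:director]->(v) WHERE v.name = 'Nolan' RETURN e
```

Because the compilation step is deterministic and lossless, the neural parser only has to target the single natural-language-like IR rather than each formal language's syntax.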
Deep encoders have proven effective in improving neural machine translation (NMT) systems, but translation quality reaches a plateau once the number of encoder layers exceeds 18. Worse still, deeper networks consume large amounts of memory, making efficient training impossible. In this paper, we present Symbiosis Networks, which comprise a full network as the Symbiosis Main Network (M-Net) and a shared sub-network with the same structure but fewer layers as the Symbiosis Sub-Network (S-Net). We apply Symbiosis Networks to the Transformer-deep (m-n) architecture and define a specific regularization loss $\mathcal{L}_{\tau}$ between the M-Net and S-Net in NMT. We train the Symbiosis Networks jointly, aiming to improve the performance of the M-Net. Our proposed training strategy improves Transformer-deep (12-6) by 0.61, 0.49, and 0.69 BLEU over classic training on the WMT'14 EN->DE, DE->EN, and EN->FR tasks. Moreover, our Transformer-deep (12-6) even outperforms the classic Transformer-deep (18-6).
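A hedged PyTorch-style sketch of the joint objective follows. The exact form of $\mathcal{L}_{\tau}$ is defined in the paper; a KL divergence between the two networks' output distributions is assumed here as a stand-in.

```python
import torch.nn.functional as F

def symbiosis_loss(m_logits, s_logits, targets, tau=1.0):
    """Joint M-Net/S-Net training loss (illustrative form).

    Both networks are supervised on the same targets, and a regularizer
    (assumed here: KL divergence) couples their output distributions.
    """
    ce_m = F.cross_entropy(m_logits, targets)  # main network objective
    ce_s = F.cross_entropy(s_logits, targets)  # shared sub-network objective
    reg = F.kl_div(F.log_softmax(s_logits, dim=-1),
                   F.softmax(m_logits, dim=-1),
                   reduction="batchmean")      # couple S-Net to M-Net
    return ce_m + ce_s + tau * reg
```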
Recently, non-autoregressive (NAT) models, which predict outputs in parallel, have achieved substantial improvements in generation speed compared to autoregressive (AT) models. While performing worse on raw data, most NAT models are trained as student models on distilled data generated by an AT teacher model, which is known as sequence-level knowledge distillation. An effective training strategy for improving model performance is Self-Distillation Mixup (SDM) training, which pre-trains a model on raw data, generates distilled data with the pre-trained model itself, and finally re-trains the model on the combination of raw data and distilled data. In this work, we aim to apply SDM to NAT models, but find that directly adopting SDM for NAT models yields no improvement in translation quality. Through careful analysis, we observe that the failure is correlated with the modeling and confirmation biases between the AT teacher model and the NAT student model. Based on these findings, we propose an enhanced strategy named SDMRT that adds two stages to classic SDM: one is Pre-Rerank on self-distilled data, the other is Fine-Tune on filtered teacher-distilled data. Our results outperform baselines by 0.6 to 1.2 BLEU on multiple NAT models. As an additional bonus, for iterative refinement NAT models, our method can outperform baselines within half the number of iterations, which means a 2x speedup.
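The overall schedule can be sketched as follows; every routine here is a printing stub standing in for a full NAT training or decoding run, so the listing shows only the ordering of classic SDM and the two added SDMRT stages.

```python
# Placeholder routines: each stands in for a full training/decoding step.
def train(model, data): print(f"train {model} on {len(data)} pairs")
def decode(model, data): print(f"decode with {model}"); return list(data)
def rerank(model, data): print(f"rerank {len(data)} hypotheses with {model}")
def filtered(data): return data  # assumed: drop pairs the filter rejects

raw = ["src ||| tgt"] * 4  # toy parallel corpus

train("nat", raw)                         # 1. SDM: pre-train on raw data
self_dist = decode("nat", raw)            # 2. SDM: self-distillation
rerank("nat", self_dist)                  # 3. SDMRT: Pre-Rerank on self-distilled data
train("nat", raw + self_dist)             # 4. SDM: re-train on the mixture
teacher_dist = decode("at-teacher", raw)  # 5. sequence-level KD from the AT teacher
train("nat", filtered(teacher_dist))      # 6. SDMRT: Fine-Tune on filtered teacher data
```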
We introduce a new data-driven approach with physics-based priors for scene-level normal estimation from a single polarization image. Existing shape-from-polarization (SfP) works mainly focus on estimating the normals of a single object rather than complex scenes in the wild. A key obstacle to high-quality scene-level SfP is the lack of real-world SfP data for complex scenes. Hence, we contribute the first real-world scene-level SfP dataset with paired input polarization images and ground-truth normal maps. We then propose a learning-based framework with a multi-head self-attention module and viewing encoding, designed to handle the increased polarization ambiguities caused by complex materials and non-orthographic projection in scene-level SfP. Since the relationship between polarized light and surface normals is not affected by distance, our trained model can generalize to far-field outdoor scenes. Experimental results demonstrate that our method significantly outperforms existing SfP models on the two datasets. Our dataset and source code will be publicly available at \url{https://github.com/chenyanglei/sfp-wild}.
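The physics prior behind SfP is the standard relation between the Stokes vector of reflected light and the degree and angle of linear polarization, which constrain the surface normal's zenith and azimuth angles. A minimal sketch, assuming the common four-polarizer-angle capture setup:

```python
import numpy as np

def polarization_cues(i0, i45, i90, i135, eps=1e-8):
    """Compute DoLP and AoLP from intensities at polarizer angles 0/45/90/135 deg.

    The degree (DoLP) and angle (AoLP) of linear polarization constrain the
    zenith and azimuth of the surface normal, up to the well-known pi-ambiguity.
    """
    s0 = 0.5 * (i0 + i45 + i90 + i135)  # total intensity (Stokes S0)
    s1 = i0 - i90                       # Stokes S1
    s2 = i45 - i135                     # Stokes S2
    dolp = np.sqrt(s1**2 + s2**2) / (s0 + eps)
    aolp = 0.5 * np.arctan2(s2, s1)
    return dolp, aolp
```

Because these cues depend only on ratios of intensities, not on absolute distance, they remain informative in far-field scenes, which is the property the abstract appeals to.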
For differential privacy under sub-Gamma noise, we derive the asymptotic properties of a class of binary-valued network models with a general link function. In this paper, we release the degree sequence of a binary network under a general noisy mechanism, with the discrete Laplace mechanism as a special case. We establish asymptotic results, including the consistency and asymptotic normality of the parameter estimator, as the number of parameters goes to infinity in a class of network models. Simulations and a real data example are provided to illustrate the asymptotic results.
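A minimal sketch of the discrete Laplace release step: noise is drawn as the difference of two i.i.d. geometric random variables, a standard sampler for the discrete Laplace distribution. The sensitivity value below is an illustrative assumption; the paper's exact calibration may differ.

```python
import numpy as np

def release_degrees(degrees, epsilon, sensitivity=1.0, seed=0):
    """Release a degree sequence under the discrete Laplace mechanism.

    The noise has pmf proportional to lam**|k| with lam = exp(-eps/sensitivity),
    sampled as the difference of two geometric random variables.
    """
    rng = np.random.default_rng(seed)
    lam = np.exp(-epsilon / sensitivity)
    p = 1.0 - lam  # geometric success probability
    n = len(degrees)
    noise = (rng.geometric(p, n) - 1) - (rng.geometric(p, n) - 1)
    return np.asarray(degrees) + noise

print(release_degrees([5, 3, 4, 2, 4, 0], epsilon=1.0))
```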
Complex question answering over knowledge bases (Complex KBQA) is challenging because it requires various compositional reasoning capabilities, such as multi-hop inference, attribute comparison, and set operations. Existing benchmarks have some shortcomings that limit the development of Complex KBQA: 1) they only provide QA pairs without explicit reasoning processes; 2) the questions are poor in diversity or scale. To this end, we introduce KQA Pro, a dataset for Complex KBQA including ~120K diverse natural-language questions. We introduce a compositional and interpretable programming language, KoPL, to represent the reasoning process of complex questions. For each question we provide the corresponding KoPL program and SPARQL query, so KQA Pro serves both KBQA and semantic parsing tasks. Experimental results show that SOTA KBQA methods cannot achieve results on KQA Pro as promising as on current datasets, which suggests that KQA Pro is challenging and that Complex KBQA requires further research effort. We also treat KQA Pro as a diagnostic dataset for testing multiple reasoning skills, conduct a thorough evaluation of existing models, and discuss further directions for Complex KBQA. Our code and dataset can be obtained from https://github.com/shijx12/kqapro_baselines.
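To give a feel for what such a program looks like, here is a KoPL-style program for a toy multi-hop question, written as plain Python data. The function names follow the compositional style the paper describes, but the exact function inventory and argument conventions are defined by KoPL itself.

```python
# KoPL-style program for "What is the population of the capital of France?"
# (illustrative; consult the KQA Pro paper for KoPL's actual function set)
program = [
    ("Find",          ["France"]),              # locate the topic entity
    ("Relate",        ["capital", "forward"]),  # hop along the 'capital' relation
    ("FilterConcept", ["city"]),                # keep entities of concept 'city'
    ("QueryAttr",     ["population"]),          # read off the target attribute
]
```

Each step consumes the previous step's output, which is what makes the reasoning process explicit and checkable, in contrast to benchmarks that provide only QA pairs.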
Face Anti-spoofing (FAS) is essential to secure face recognition systems from various physical attacks. However, recent research generally focuses on short-distance applications (i.e., phone unlocking) while lacking consideration of long-distance scenes (i.e., surveillance security checks). In order to promote relevant research and fill this gap in the community, we collect a large-scale Surveillance High-Fidelity Mask (SuHiFiMask) dataset captured under 40 surveillance scenes, which has 101 subjects from different age groups with 232 3D attacks (high-fidelity masks), 200 2D attacks (posters, portraits, and screens), and 2 adversarial attacks. In this setting, low image resolution and noise interference are the new challenges faced in surveillance FAS. Together with the SuHiFiMask dataset, we propose a Contrastive Quality-Invariance Learning (CQIL) network to alleviate the performance degradation caused by image quality, from three aspects: (1) An Image Quality Variable module (IQV) is introduced to recover image information associated with discrimination by incorporating a super-resolution network. (2) Generated sample pairs are used to simulate quality variance distributions, helping the contrastive learning strategy obtain robust feature representations under quality variation. (3) A Separate Quality Network (SQN) is designed to learn discriminative features independent of image quality. Finally, extensive experiments verify the quality of the SuHiFiMask dataset and the superiority of the proposed CQIL network.
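A hedged sketch of the quality-contrastive idea in aspect (2): features of the same face at different image qualities are pulled together with an InfoNCE-style loss. This is an illustrative stand-in, not the paper's exact CQIL objective.

```python
import torch
import torch.nn.functional as F

def quality_contrastive_loss(feat_hq, feat_lq, temperature=0.1):
    """InfoNCE-style loss over quality-varied pairs (illustrative form).

    feat_hq, feat_lq: (N, D) embeddings of high-/low-quality versions of
    the same N faces; row i of each tensor shares an identity.
    """
    z1 = F.normalize(feat_hq, dim=-1)
    z2 = F.normalize(feat_lq, dim=-1)
    logits = z1 @ z2.t() / temperature   # (N, N) similarity matrix
    targets = torch.arange(z1.size(0))   # positive pairs on the diagonal
    return F.cross_entropy(logits, targets)
```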
Embedding words in vector space is a fundamental first step in state-of-the-art natural language processing (NLP). Typical NLP solutions employ pre-defined vector representations to improve generalization by co-locating similar words in vector space. For instance, Word2Vec is a self-supervised predictive model that captures the context of words using a neural network. Similarly, GLoVe is a popular unsupervised model incorporating corpus-wide word co-occurrence statistics. Such word embeddings have significantly boosted important NLP tasks, including sentiment analysis, document classification, and machine translation. However, the embeddings are dense floating-point vectors, making them expensive to compute and difficult to interpret. In this paper, we instead propose to represent the semantics of words with a few defining words that are related using propositional logic. To produce such logical embeddings, we introduce a Tsetlin Machine-based autoencoder that learns logical clauses in a self-supervised manner. The clauses consist of contextual words like "black," "cup," and "hot" to define other words like "coffee," thus being human-understandable. We evaluate our embedding approach on several intrinsic and extrinsic benchmarks, outperforming GLoVe on six classification tasks. Furthermore, we investigate the interpretability of our embedding using the logical representations acquired during training. We also visualize word clusters in vector space, demonstrating how our logical embedding co-locates similar words.
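To illustrate what a human-readable clause looks like in use, here is a minimal sketch of evaluating one conjunctive clause over a bag of context words; it shows the representation only, not the Tsetlin Machine's learning procedure.

```python
def clause_matches(context_words, positives, negatives=()):
    """Evaluate a conjunctive clause such as AND("black", "cup", "hot").

    The clause fires when every positive literal appears in the context and
    no negated literal does, giving an interpretable logical embedding.
    """
    words = set(context_words)
    return (all(w in words for w in positives)
            and not any(w in words for w in negatives))

# A clause that helps define "coffee" from its typical context:
print(clause_matches(["hot", "black", "cup", "morning"],
                     positives=("black", "cup", "hot")))  # True
```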